Apple released its Foundation Models framework to developers with iOS 26, iPadOS 26 and macOS 26. It enables third-party apps to tap into the same on-device large language model that powers Apple Intelligence, the iPhone giant said Monday.
“We’re excited to see developers around the world already bringing privacy-protected intelligence features into their apps,” said Susan Prescott, Apple’s vice president of Worldwide Developer Relations. “The in-app experiences they’re creating are expansive and creative, showing just how much opportunity the Foundation Models framework opens up.”
Built into iOS 26, iPadOS 26 and macOS 26, the framework allows developers to build AI-powered features that run entirely on device, protecting user privacy while working offline and incurring no inference costs, Apple noted. Its announcement showcased how developers across health, fitness, education and productivity categories already use the framework to create innovative experiences that would have previously required cloud-based infrastructure or been impossible to build at all.
Privacy-first AI comes to the App Store
Unlike traditional AI implementations that send user data to remote servers for processing, the Foundation Models framework processes everything locally on the user’s device. This architecture means personal information like workout logs, journal entries and study notes never leaves the device, which helps address growing privacy concerns around AI applications.
Apple built the framework around a 3-billion-parameter model tightly integrated with Swift, making it accessible to developers already working within Apple’s ecosystem. Apple provides guided generation to ensure consistently formatted output, and developers can create tools that let the model call back into the app when it needs additional context.
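As a rough sketch of what this looks like in practice, a basic request to the on-device model goes through the framework’s session type. This is an illustrative minimal example based on Apple’s published FoundationModels API; the instructions and prompt strings are invented for illustration.

```swift
import FoundationModels

// A minimal request to the on-device model. Everything runs locally;
// no network call, server, or API key is involved.
let session = LanguageModelSession(
    instructions: "You are a concise assistant inside a fitness app."
)
let response = try await session.respond(
    to: "Suggest one warm-up exercise for a runner."
)
print(response.content)  // Plain-text answer generated on device
```

Because inference happens on device, the same call works offline and costs the developer nothing per request.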
Health and fitness apps lead early adoption
Fitness apps have been quick to embrace the framework’s capabilities. SmartGym now lets users describe a workout in natural language and automatically transforms it into a structured routine complete with sets, reps, and rest times. The app’s Smart Trainer feature explains the reasoning behind its recommendations, helping users understand why it might suggest increasing weight or adjusting their routine.
“The Foundation Models framework enables us to deliver on-device features that were once impossible,” said Matt Abras, SmartGym’s CEO. “It’s simple to implement, yet incredibly powerful in its capabilities.”
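A natural-language-to-structured-routine feature like SmartGym’s is the kind of use case guided generation targets: instead of parsing free-form text, the app declares the shape it wants and the framework constrains the model’s output to it. The `Workout` and `Exercise` types below are hypothetical stand-ins, not SmartGym’s actual code; `@Generable` and `@Guide` are the framework’s guided-generation annotations.

```swift
import FoundationModels

// Hypothetical types describing the structured output we want.
// @Generable tells the framework to constrain generation to this shape;
// @Guide gives the model a hint about each field's meaning.
@Generable
struct Exercise {
    var name: String
    var sets: Int
    var reps: Int
    @Guide(description: "Rest time between sets, in seconds")
    var restSeconds: Int
}

@Generable
struct Workout {
    @Guide(description: "A short name for the routine")
    var name: String
    var exercises: [Exercise]
}

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Three rounds of push-ups and squats, 12 reps, a minute of rest",
    generating: Workout.self
)
// response.content is a fully typed Workout value, not free-form text.
```

The guarantee of a typed result is what makes it safe to pipe the model’s output straight into app UI like sets-and-reps cards.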
Journaling app Stoic uses the framework to generate personalized prompts based on users’ recent entries and emotional state. If someone logs poor sleep or a low mood, they receive compassionate, encouraging messages. The app can also organize related entries and surface summaries through natural language search, all while keeping every word private on the device.
“Features that once required heavy back-end infrastructure now run natively on device with minimal setup,” said Maciej Lobodzinski, Stoic’s founder. “That let our small team deliver huge value fast while keeping every user’s data private.”
Other fitness apps have found creative applications. SwingVision analyzes tennis and pickleball videos to provide specific feedback on technique. 7 Minute Workout creates custom routines based on injuries or upcoming events. Train Fitness recommends alternative exercises when specific equipment is unavailable.
Education apps get conversational interfaces
The education sector has found particularly compelling use cases. CellWalk, an immersive biology app that lets users explore 3D cellular structures, now provides conversational explanations of scientific terms tailored to each learner’s knowledge level. The app uses tool calling to ground its responses in accurate scientific information.
“Our visuals have always been interactive, but with the Foundation Models framework, the text itself comes alive,” said Tim Davison, CellWalk’s developer. “Scientific data hidden in our app becomes a dynamic system that adapts to each learner.”
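Tool calling, as CellWalk uses it, lets the model request facts from the app rather than rely on its own training data. A hedged sketch of the pattern follows; `GlossaryTool` and its lookup logic are invented for illustration, while the `Tool` protocol and session wiring follow the framework’s documented design (details simplified).

```swift
import FoundationModels

// Hypothetical tool in the spirit of CellWalk's approach: the model can
// call back into the app to ground its answer in local scientific data.
struct GlossaryTool: Tool {
    let name = "lookUpTerm"
    let description = "Looks up a biology term in the app's local glossary."

    @Generable
    struct Arguments {
        @Guide(description: "The scientific term to look up")
        var term: String
    }

    func call(arguments: Arguments) async throws -> String {
        // A real app would query a bundled database here.
        "Glossary definition of \(arguments.term)."
    }
}

// Registering the tool lets the model invoke it mid-response when
// it decides it needs the app's data to answer accurately.
let session = LanguageModelSession(tools: [GlossaryTool()])
let answer = try await session.respond(to: "What is a ribosome?")
```

This is how responses stay grounded in the app’s own content: the model decides when to call the tool, but the facts come from the app.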
Grammo’s AI tutor explains why grammar exercise answers are incorrect and generates new questions on demand for deeper practice. Vocabulary automatically categorizes saved words into themes like “Verbs” or “Anatomy” using natural language understanding. Platzi, a Spanish-language education platform, lets users ask questions about lesson content and receive instant, contextual responses.
Productivity apps become more intuitive
Productivity apps are using the framework to eliminate friction from everyday tasks. Stuff, a to-do list app, now understands natural language input like “Call Sophia Friday” and automatically populates dates, tags, and lists. Its Listen Mode converts spoken thoughts into organized tasks. And Scan Mode captures handwritten to-dos from photos.
“Running entirely on device, it’s powerful, predictable, and remarkably performant,” said Austin Blake, Stuff’s developer. “Its simplicity made it possible for me to launch both Listen Mode and Scan Mode together in a single release — something that would’ve taken much longer otherwise.”
Video editing app VLLO combines the Foundation Models framework with Apple’s Vision framework to analyze video content and automatically suggest appropriate background music and stickers for each scene. Signeasy generates document summaries and answers specific questions about contracts and agreements. OmniFocus 4 can now generate entire projects with next steps, such as creating a packing list for an upcoming trip.
Availability and requirements
The Foundation Models framework is available now with iOS 26, iPadOS 26, and macOS 26. It works on any Apple Intelligence-compatible device with Apple Intelligence enabled. Apple Intelligence currently supports English, French, German, Italian, Portuguese (Brazil), Spanish, Chinese (Simplified), Japanese and Korean, though some features may not be available in all regions or languages.
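Because availability depends on the device and Apple Intelligence being enabled, apps are expected to check for the model before exposing AI features. A minimal sketch using the framework’s availability API (case names follow Apple’s documentation; the fallback handling is illustrative):

```swift
import FoundationModels

// Check whether the on-device model can be used before showing AI features.
switch SystemLanguageModel.default.availability {
case .available:
    // Safe to create a LanguageModelSession and offer AI features.
    print("On-device model ready")
case .unavailable(let reason):
    // e.g. Apple Intelligence disabled, unsupported device, model downloading.
    print("Model unavailable: \(reason)")
}
```

Handling the unavailable case gracefully matters because the same app binary runs on devices that do and do not support Apple Intelligence.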
For developers, the barrier to entry is relatively low thanks to Swift integration and straightforward APIs. The framework’s guided generation ensures reliable output formatting, while tool calling capabilities allow models to request additional information from apps when needed. Most importantly, the free inference and offline capabilities mean developers can build sophisticated AI features without infrastructure costs or connectivity requirements.